1,200 research outputs found

    Seeing things

    This paper is concerned with the problem of attaching meaningful symbols to aspects of the visible environment in machine and biological vision. It begins with a review of some of the arguments commonly used to support either the 'symbolic' or the 'behaviourist' approach to vision. Having explored these avenues without arriving at a satisfactory conclusion, we then present a novel argument, which starts from the question: given a functional description of a vision system, when could it be said to support a symbolic interpretation? We argue that to attach symbols to a system, its behaviour must exhibit certain well-defined regularities in its response to its visual input, and that these are best described in terms of invariance and equivariance to transformations which act in the world and induce corresponding changes of the vision system state. This approach is illustrated with a brief exploration of the problem of identifying and acquiring visual representations having these symmetry properties, which also highlights the advantages of using an 'active' model of vision.
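
    As a concrete toy illustration of the invariance/equivariance criterion (a minimal Python sketch under our own assumptions; the paper itself gives no code), the snippet below builds a representation that is equivariant to circular shifts of the input, and a derived quantity that is invariant to them:

        import numpy as np

        def feature_map(x, kernel):
            # Toy representation: circular 1D convolution (periodic boundary),
            # computed via the FFT so that shift equivariance holds exactly.
            k = np.zeros(len(x))
            k[:len(kernel)] = kernel
            return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

        rng = np.random.default_rng(0)
        x = rng.standard_normal(64)
        kernel = np.array([0.25, 0.5, 0.25])
        shift = 5

        # Equivariance: shifting the input shifts the representation correspondingly.
        lhs = feature_map(np.roll(x, shift), kernel)
        rhs = np.roll(feature_map(x, kernel), shift)
        print("equivariant:", np.allclose(lhs, rhs))   # True

        # Invariance: the maximum response does not depend on where the pattern sits.
        print("invariant:", np.isclose(lhs.max(), feature_map(x, kernel).max()))   # True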

    Fairplay or Greed: Mandating University Responsibility Toward Student Inventors

    Over twenty years have passed since the enactment of the Patent and Trademark Law Amendments Act (Bayh-Dole Act), and universities continue to struggle with their technology transfer infrastructures. Lost in that struggle are those who could be considered the backbone of university research: the students. Graduate and undergraduate students remain baffled by the patent assignment and technology transfer processes within their various institutions. Universities should undertake efforts to clarify the student's position in the creative process.

    Can parametric statistical methods be trusted for fMRI based group studies?

    The most widely used task fMRI analyses rely on parametric methods that depend on a variety of assumptions. While individual aspects of these fMRI models have been evaluated, they have not been evaluated in a comprehensive manner with empirical data. In this work, a total of 2 million random task fMRI group analyses have been performed using resting state fMRI data, to compute empirical familywise error rates for the software packages SPM, FSL and AFNI, as well as a standard non-parametric permutation method. While there is some variation, for a nominal familywise error rate of 5% the parametric statistical methods are shown to be conservative for voxel-wise inference and invalid for cluster-wise inference; in particular, cluster size inference with a cluster defining threshold of p = 0.01 generates familywise error rates up to 60%. We conduct a number of follow-up analyses and investigations that suggest the cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape. By comparison, the non-parametric permutation test, which is based on a small number of assumptions, is found to produce valid results for voxel-wise as well as cluster-wise inference. Using real task data, we compare the results between one parametric method and the permutation test, and find stark differences in the conclusions drawn by the two under cluster inference. These findings speak to the need to validate the statistical methods being used in the neuroimaging field.
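
    For readers unfamiliar with the non-parametric alternative, the following is a minimal sketch of a one-sample permutation test via sign flipping, with maximum-statistic correction for familywise error (illustrative Python of our own, not the code used in the study; it assumes symmetrically distributed subject-level errors):

        import numpy as np

        def sign_flip_maxT(con, n_perm=1000, seed=0):
            # One-sample permutation test via sign flipping.
            # con: (n_subjects, n_voxels) array of first-level contrast estimates.
            # Returns observed t-values and FWE-corrected p-values based on the
            # permutation distribution of the maximum t-statistic across voxels.
            rng = np.random.default_rng(seed)
            n = con.shape[0]
            tstat = lambda d: d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

            t_obs = tstat(con)
            max_null = np.empty(n_perm)
            for i in range(n_perm):
                signs = rng.choice([-1.0, 1.0], size=(n, 1))  # flip each subject's sign
                max_null[i] = tstat(signs * con).max()

            # FWE-corrected p-value: fraction of permutations whose maximum beats t_obs.
            p_fwe = (1 + (max_null[:, None] >= t_obs[None, :]).sum(0)) / (n_perm + 1)
            return t_obs, p_fwe

        # On pure-noise data the number of significant voxels should be near zero.
        con = np.random.default_rng(1).standard_normal((20, 500))
        t_obs, p_fwe = sign_flip_maxT(con)
        print((p_fwe < 0.05).sum(), "voxels significant at 5% FWE")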

    Cluster Failure Revisited: Impact of First Level Design and Data Quality on Cluster False Positive Rates

    Methodological research rarely generates broad interest, yet our work on the validity of cluster inference methods for functional magnetic resonance imaging (fMRI) created intense discussion of both the minutiae of our approach and its implications for the discipline. In the present work, we take on various critiques of our work and further explore the limitations of our original study. We address issues about the particular event-related designs we used, considering multiple event types and randomisation of events between subjects. We consider the lack of validity found with one-sample permutation (sign flipping) tests, investigating a number of approaches to improve the false positive control of this widely used procedure. We found that the combination of a two-sided test and cleaning the data using ICA FIX resulted in nominal false positive rates for all datasets, meaning that data cleaning is important not only for resting state fMRI, but also for task fMRI. Finally, we discuss the implications of our work for the fMRI literature as a whole, estimating that at least 10% of fMRI studies have used the most problematic cluster inference method (p = 0.01 cluster defining threshold), and how individual studies can be interpreted in light of our findings. These additional results underscore our original conclusions on the importance of data sharing and thorough evaluation of statistical methods on realistic null data.
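
    The cluster inference procedure at issue can likewise be sketched non-parametrically: threshold the statistic map at the cluster defining threshold and compare each cluster's extent against a permutation null of the maximum extent (illustrative 1D Python, not the authors' pipeline; all names are ours):

        import numpy as np

        def cluster_sizes(stat, cdt):
            # Sizes of contiguous supra-threshold runs in a 1D statistic map.
            above = np.r_[0, (stat > cdt).astype(int), 0]
            edges = np.flatnonzero(np.diff(above))
            return edges[1::2] - edges[::2]

        def cluster_fwe(con, cdt=2.3, n_perm=1000, seed=0):
            # Permutation null of the maximum cluster extent, via sign flipping.
            rng = np.random.default_rng(seed)
            n = con.shape[0]
            tstat = lambda d: d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

            obs = cluster_sizes(tstat(con), cdt)
            max_null = np.empty(n_perm)
            for i in range(n_perm):
                signs = rng.choice([-1.0, 1.0], size=(n, 1))
                sz = cluster_sizes(tstat(signs * con), cdt)
                max_null[i] = sz.max() if sz.size else 0

            # FWE-corrected p-value for each observed cluster's extent.
            return obs, [(1 + (max_null >= s).sum()) / (n_perm + 1) for s in obs]

        con = np.random.default_rng(1).standard_normal((20, 500))   # null "image"
        sizes, p_fwe = cluster_fwe(con)
        print(list(zip(sizes, np.round(p_fwe, 2))))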

    Reply to Chen et al.: Parametric methods for cluster inference perform worse for two-sided t-tests

    One-sided t-tests are commonly used in the neuroimaging field, but two-sided tests should be the default unless a researcher has a strong reason for using a one-sided test. Here we extend our previous work on cluster false positive rates, which used one-sided tests, to two-sided tests. Briefly, we found that parametric methods perform worse for two-sided t-tests, and that non-parametric methods perform equally well for one-sided and two-sided tests.
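
    In a sign-flipping permutation test like the sketch above, the two-sided version amounts to building the null from the maximum of |t| rather than of t (again a minimal illustrative sketch, not the authors' code):

        import numpy as np

        def sign_flip_two_sided(con, n_perm=1000, seed=0):
            # Two-sided one-sample permutation test: null of the maximum |t|.
            rng = np.random.default_rng(seed)
            n = con.shape[0]
            tstat = lambda d: d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))

            t_obs = tstat(con)
            max_null = np.array([
                np.abs(tstat(rng.choice([-1.0, 1.0], size=(n, 1)) * con)).max()
                for _ in range(n_perm)
            ])
            # Compare |t_obs| against the max-|t| null for FWE-corrected p-values.
            return (1 + (max_null[:, None] >= np.abs(t_obs)[None, :]).sum(0)) / (n_perm + 1)

        p_fwe = sign_flip_two_sided(np.random.default_rng(1).standard_normal((20, 500)))
        print((p_fwe < 0.05).sum(), "voxels significant at 5% FWE, two-sided")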

    Gaussian process regression can turn non-uniform and undersampled diffusion MRI data into diffusion spectrum imaging

    We propose to use Gaussian process regression to accurately estimate the diffusion MRI signal at arbitrary locations in q-space. By estimating the signal on a grid, we can perform synthetic diffusion spectrum imaging: reconstructing the ensemble averaged propagator (EAP) by an inverse Fourier transform. We also propose an alternative reconstruction method that guarantees a nonnegative EAP integrating to unity. The reconstruction is validated on data simulated from two Gaussians at various crossing angles. Moreover, we demonstrate on non-uniformly sampled in vivo data that the method is far superior to linear interpolation, and allows a drastic undersampling of the data with only a minor loss of accuracy. We envision the method as a potential replacement for standard diffusion spectrum imaging, in particular when acquisition time is limited.
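
    A rough one-dimensional analogue of the proposed pipeline, using scikit-learn's Gaussian process regressor (the paper works in full 3D q-space with its own covariance choices, so treat this purely as a sketch):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Non-uniform 1D "q-space" samples of a toy diffusion signal (Gaussian decay).
        rng = np.random.default_rng(0)
        q_obs = np.sort(rng.uniform(-1.0, 1.0, 30))
        s_obs = np.exp(-4.0 * q_obs**2) + 0.01 * rng.standard_normal(30)

        # Gaussian process regression estimates the signal on a uniform grid.
        gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-4))
        gp.fit(q_obs[:, None], s_obs)
        q_grid = np.linspace(-1.0, 1.0, 64, endpoint=False)
        s_grid = gp.predict(q_grid[:, None])

        # Synthetic DSI step: the inverse Fourier transform of the gridded signal
        # estimates the (here one-dimensional) ensemble averaged propagator.
        eap = np.abs(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(s_grid))))
        print("EAP peak at displacement bin:", eap.argmax())  # centre bin for isotropic decay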

    On the Property Rights System of the State Enterprises in China

    Detailed analysis of spinal deformity is important within orthopaedic healthcare, in particular for assessment of idiopathic scoliosis. This paper addresses this challenge by proposing an image analysis method capable of providing a full three-dimensional spine characterization. The proposed method is based on the registration of a highly detailed spine model to image data from computed tomography. The registration process provides an accurate segmentation of each individual vertebra and the ability to derive various measures describing the spinal deformity. The derived measures are estimated from landmarks attached to the spine model and transferred to the patient data according to the registration result. Evaluation of the method yields an average point-to-surface error of 0.9 ± 0.9 mm (comparing segmentations) and an average target registration error of 2.3 ± 1.7 mm (comparing landmarks). Comparing automatic and manual measurements of axial vertebral rotation gives a mean absolute difference of 2.5° ± 1.8°, which is on a par with other computerized methods for assessing axial vertebral rotation. A significant advantage of our method, compared to other computerized methods for rotational measurements, is that it does not rely on vertebral symmetry for computing the rotational measures. The proposed method is fully automatic and computationally efficient, requiring only three to four minutes to process an entire image volume covering vertebrae L5 to T1. Given the use of landmarks, the method can readily be adapted to estimate other measures describing a spinal deformity by changing the set of employed landmarks. In addition, the method has the potential to be used for accurate segmentation of the vertebrae in routine computed tomography examinations, given the relatively low point-to-surface error.
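
    As an illustration of how a landmark-based rotational measure avoids any symmetry assumption, the toy function below derives an axial rotation angle directly from two transferred landmarks (the landmark choice and the formula are ours, not necessarily the paper's exact definition):

        import numpy as np

        def axial_vertebral_rotation(left_pedicle, right_pedicle):
            # Illustrative measure: the angle, in the axial (x-y) plane, between
            # the left-to-right pedicle direction and the scanner's left-right
            # axis. Landmarks are (x, y, z) positions transferred from the spine
            # model to patient space by the registration.
            d = np.asarray(right_pedicle, float) - np.asarray(left_pedicle, float)
            return np.degrees(np.arctan2(d[1], d[0]))

        # Toy usage: a vertebra rotated about the body axis by roughly 10 degrees.
        print(axial_vertebral_rotation((-1.0, -0.17, 0.0), (1.0, 0.17, 0.0)))  # ~9.7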

    Bayesian uncertainty quantification in linear models for diffusion MRI

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least squares. However, such approaches lack any notion of uncertainty, which could be valuable in, for example, group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed form. We simulated measurements from single- and double-tensor models, where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation into probabilistic models, capable of accurate uncertainty quantification.
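
    The closed-form machinery the abstract refers to is standard Bayesian linear regression; a minimal sketch, including the closed-form posterior of an affine function of the coefficients (prior, basis and variable names are illustrative, not the paper's):

        import numpy as np

        def bayes_linreg(X, y, alpha=1.0, sigma2=1.0):
            # Closed-form posterior for w in y = X w + noise, with prior
            # w ~ N(0, alpha^-1 I) and noise variance sigma2. Returns the
            # posterior mean and covariance of the coefficients.
            p = X.shape[1]
            cov = np.linalg.inv(alpha * np.eye(p) + X.T @ X / sigma2)
            mean = cov @ X.T @ y / sigma2
            return mean, cov

        def affine_posterior(a, b, mean, cov):
            # Posterior of q = a^T w + b: also Gaussian, in closed form.
            return a @ mean + b, a @ cov @ a

        # Toy usage: fit a quadratic basis, then get uncertainty on a derived quantity.
        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 50)
        X = np.stack([np.ones_like(x), x, x**2], axis=1)
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

        mean, cov = bayes_linreg(X, y, alpha=1e-2, sigma2=0.01)
        a = np.array([0.0, 1.0, 2.0])          # derivative of the fit at x = 1
        q_mean, q_var = affine_posterior(a, 0.0, mean, cov)
        print(f"q = {q_mean:.3f} +/- {np.sqrt(q_var):.3f}")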

    Dynamic Investigative Practice at the International Criminal Court

    Direct Weight Optimization (DWO) is a nonparametric estimation approach that has appeared in recent years within the field of nonlinear system identification. In previous work, all function classes for which DWO has been studied have included only continuous functions. However, in many applications it would be desirable to also be able to handle discontinuous functions. Inspired by the bilateral filter method from image processing, such an extension of the DWO framework is proposed for the smoothing problem. Examples show that the new approach handles discontinuities in a manner similar to the bilateral filter, while DWO at the same time offers greater flexibility with respect to the function classes handled.
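
    DWO itself obtains its weights by solving a convex program, so the following is not the DWO estimator; it is a plain 1D bilateral filter, included only to illustrate the borrowed idea of weights that decay with both spatial distance and response difference, which is what preserves discontinuities:

        import numpy as np

        def bilateral_1d(y, sigma_s=3.0, sigma_r=0.5):
            # 1D bilateral filter: each output is a weighted average whose weights
            # decay with spatial distance AND with the difference in observed
            # values, so step discontinuities survive the smoothing.
            idx = np.arange(len(y))
            out = np.empty(len(y))
            for i in range(len(y)):
                w = np.exp(-(idx - i) ** 2 / (2 * sigma_s**2)      # spatial closeness
                           - (y - y[i]) ** 2 / (2 * sigma_r**2))   # range closeness
                out[i] = w @ y / w.sum()
            return out

        # Toy usage: a noisy step function keeps its edge after filtering.
        rng = np.random.default_rng(0)
        y = np.where(np.arange(100) < 50, 0.0, 2.0) + 0.2 * rng.standard_normal(100)
        print(np.round(bilateral_1d(y)[45:55], 2))   # sharp jump preserved near index 50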